
    A Review of the Mass Measurement Techniques proposed for the Large Hadron Collider

    We review the methods which have been proposed for measuring masses of new particles at the Large Hadron Collider, paying particular attention to the kinematical techniques suitable for extracting mass information when invisible particles are expected. Comment: 72 pages; in form to be published in JPhysG

    Constrained invariant mass distributions in cascade decays. The shape of the "$m_{qll}$-threshold" and similar distributions

    Considering the cascade decay $D \to c C \to c b B \to c b a A$, in which $D, C, B, A$ are massive particles and $c, b, a$ are massless particles, we determine for the first time the shape of the distribution of the invariant mass $m_{abc}$ of the three massless particles for the sub-set of decays in which the invariant mass $m_{ab}$ of the last two particles in the chain is (optionally) constrained to lie inside an arbitrary interval, $m_{ab} \in [m_{ab}^\text{cut min}, m_{ab}^\text{cut max}]$. An example of an experimentally important distribution of this kind is the "$m_{qll}$ threshold", which is the distribution of the combined invariant mass of the visible Standard Model particles radiated from the hypothesised decay of a squark to the lightest neutralino via successive two-body decays, $\tilde q \to q \tilde\chi_2^0 \to q l \tilde l \to q l l \tilde\chi_1^0$, in which the experimenter additionally requires that $m_{ll}$ be greater than $m_{ll}^\text{max}/\sqrt{2}$. The location of the "foot" of this distribution is often used to constrain sparticle mass scales. The new results presented here permit the location of this foot to be better understood, as the shape of the distribution is derived. The effects of varying the position of the $m_{ll}$ cut(s) may now be seen more easily. Comment: 12 pages, 3 figures
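    As an illustration of the kinematics described above, a toy Monte Carlo can generate the cascade $D \to cC \to cbB \to cbaA$ through successive isotropic two-body decays and collect $m_{abc}$ for events passing an $m_{ab}$ cut. The mass spectrum and cut window below are invented for illustration; this is a sketch of the kinematics, not the paper's analytic derivation of the shape:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    def boost(p4, beta):
        """Lorentz-boost four-vector p4 = (E, px, py, pz) by velocity vector beta."""
        b2 = beta @ beta
        if b2 == 0.0:
            return p4
        gamma = 1.0 / np.sqrt(1.0 - b2)
        E, p = p4[0], p4[1:]
        bp = beta @ p
        E_new = gamma * (E + bp)
        p_new = p + ((gamma - 1.0) * bp / b2 + gamma * E) * beta
        return np.concatenate(([E_new], p_new))

    def two_body(M, m1, m2):
        """Isotropic two-body decay of a particle of mass M at rest."""
        pstar = np.sqrt((M**2 - (m1 + m2)**2) * (M**2 - (m1 - m2)**2)) / (2 * M)
        cos_t = rng.uniform(-1.0, 1.0)
        phi = rng.uniform(0.0, 2.0 * np.pi)
        sin_t = np.sqrt(1.0 - cos_t**2)
        n = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
        E1, E2 = np.sqrt(pstar**2 + m1**2), np.sqrt(pstar**2 + m2**2)
        return np.concatenate(([E1], pstar * n)), np.concatenate(([E2], -pstar * n))

    def mass(p4):
        return np.sqrt(max(p4[0]**2 - p4[1:] @ p4[1:], 0.0))

    # Invented mass spectrum for D -> c C -> c b B -> c b a A (c, b, a massless)
    mD, mC, mB, mA = 600.0, 400.0, 250.0, 100.0
    kept = []
    for _ in range(20000):
        pc, pC = two_body(mD, 0.0, mC)                      # D decays at rest
        pb, pB = two_body(mC, 0.0, mB)                      # in C rest frame
        pb, pB = boost(pb, pC[1:] / pC[0]), boost(pB, pC[1:] / pC[0])
        pa, pA = two_body(mB, 0.0, mA)                      # in B rest frame
        pa, pA = boost(pa, pB[1:] / pB[0]), boost(pA, pB[1:] / pB[0])
        if 50.0 < mass(pa + pb) < 150.0:                    # the optional m_ab cut
            kept.append(mass(pa + pb + pc))
    ```

    Histogramming `kept` then shows how the $m_{abc}$ shape responds as the cut window is moved, which is the effect the paper quantifies analytically.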

    Improving estimates of the number of fake leptons and other mis-reconstructed objects in hadron collider events: BoB's your UNCLE. (Previously "The Matrix Method Reloaded")

    We consider current and alternative approaches to setting limits on new physics signals having backgrounds from misidentified objects; for example jets misidentified as leptons, b-jets or photons. Many ATLAS and CMS analyses have used a heuristic matrix method for estimating the background contribution from such sources. We demonstrate that the matrix method suffers from statistical shortcomings that can adversely affect its ability to set robust limits. A rigorous alternative method is discussed, and is seen to produce fake rate estimates and limits with better properties, but is found to be too costly to use. Having investigated the nature of the approximations used to derive the matrix method, we propose a third strategy that is seen to marry the speed of the matrix method to the performance and physicality of the more rigorous approach. Comment: v1: 11 pages, 5 figures. v2: title change requested by referee, and other corrections/clarifications found during review. v3: final tweaks suggested during review + move from revtex to JHEP style
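    A minimal sketch of the heuristic single-lepton matrix method the abstract refers to (not the improved method the paper proposes): invert a 2x2 linear system relating loose/tight event counts to real/fake yields via assumed efficiencies. The counts and efficiencies below are invented; note that downward fluctuations in `n_tight` can drive the estimate negative, the kind of statistical shortcoming the paper addresses:

    ```python
    import numpy as np

    def fake_estimate(n_loose, n_tight, eff_real, eff_fake):
        """Heuristic single-lepton matrix method. Solves
            [n_tight]   [eff_real  eff_fake] [n_real]
            [n_loose] = [   1         1    ] [n_fake]
        for the real/fake yields in the loose sample, then returns the
        expected number of fake-lepton events in the tight sample."""
        M = np.array([[eff_real, eff_fake],
                      [1.0,      1.0]])
        n_real, n_fake = np.linalg.solve(M, [n_tight, n_loose])
        return eff_fake * n_fake

    # Invented example: 1000 loose events, 700 of them tight,
    # with real (fake) leptons passing the tight cut 90% (20%) of the time.
    estimate = fake_estimate(1000, 700, 0.9, 0.2)
    ```

    Nothing in this inversion constrains `n_fake` to be non-negative, which is why the heuristic can return unphysical background estimates in low-count regions.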

    Efficient simulation techniques for biochemical reaction networks

    Discrete-state, continuous-time Markov models are becoming commonplace in the modelling of biochemical processes. The mathematical formulations that such models lead to are opaque, and, due to their complexity, are often considered analytically intractable. As such, a variety of Monte Carlo simulation algorithms have been developed to explore model dynamics empirically. Whilst well-known methods, such as the Gillespie Algorithm, can be implemented to investigate a given model, the computational demands of traditional simulation techniques remain a significant barrier to modern research. In order to further develop and explore biologically relevant stochastic models, new and efficient computational methods are required. In this thesis, high-performance simulation algorithms are developed to estimate summary statistics that characterise a chosen reaction network. The algorithms make use of variance reduction techniques, which exploit statistical properties of the model dynamics, to improve performance. The multi-level method is an example of a variance reduction technique. The method estimates summary statistics of well-mixed, spatially homogeneous models by using estimates from multiple ensembles of sample paths of different accuracies. In this thesis, the multi-level method is developed in three directions: firstly, a nuanced implementation framework is described; secondly, a reformulated method is applied to stiff reaction systems; and, finally, different approaches to variance reduction are implemented and compared. The variance reduction methods that underpin the multi-level method are then re-purposed to understand how the dynamics of a spatially-extended Markov model are affected by changes in its input parameters. By exploiting the inherent dynamics of spatially-extended models, an efficient finite difference scheme is used to estimate parametric sensitivities robustly.Comment: Doctor of Philosophy thesis submitted at the University of Oxford. 
This research was supervised by Prof Ruth E. Baker and Dr Christian A. Yates
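    For context, the Gillespie direct method mentioned in the abstract can be sketched in a few lines. This is a generic textbook implementation, not the thesis's optimised multi-level algorithms, and the pure-death example parameters are invented:

    ```python
    import random

    def gillespie(propensities, stoich, x0, t_max, seed=1):
        """Gillespie direct-method SSA for a well-mixed reaction network.
        propensities: list of functions a_j(x); stoich: state-change vectors."""
        rng = random.Random(seed)
        t, x = 0.0, list(x0)
        path = [(t, tuple(x))]
        while t < t_max:
            a = [p(x) for p in propensities]
            a0 = sum(a)
            if a0 <= 0.0:                       # no reaction can fire
                break
            t += rng.expovariate(a0)            # exponential waiting time
            u, j, acc = rng.random() * a0, 0, a[0]
            while acc < u:                      # pick which reaction fires
                j += 1
                acc += a[j]
            x = [xi + s for xi, s in zip(x, stoich[j])]
            path.append((t, tuple(x)))
        return path

    # Invented pure-death process X -> 0 with propensity k*X
    k = 0.5
    path = gillespie([lambda x: k * x[0]], [(-1,)], x0=(50,), t_max=100.0)
    ```

    Averaging many such sample paths gives the summary statistics the thesis estimates; the multi-level method reduces the variance of those averages by combining ensembles of paths simulated at different accuracies.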

    Re-weighing the evidence for a Higgs boson in dileptonic W-boson decays

    We reconsider observables for discovering and measuring the mass of a Higgs boson via its dileptonic decays: $H \to WW^* \to l\nu\, l\nu$. We define an observable generalizing the transverse mass that takes into account the fact that one of the intermediate W-bosons is likely to be on-shell. We compare this new variable with existing ones and argue that it gives a significant improvement for discovery in the region $m_h < 2 m_W$. Comment: 3 pages, 2 figures. Changes in v2: (i) implemented a model of detector smearing, (ii) switched LHC simulation from 14 TeV to 7 TeV running, (iii) presenting results for 10 rather than 3 inverse femtobarns, (iv) corrected a typo in Fig 2 legend. Changes in v3: included published erratum
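    For reference, the standard dileptonic transverse mass that the paper's variable generalises can be computed as below. The function and inputs are illustrative, and this is the ordinary $m_T$ (with the invisible system assigned zero mass), not the paper's generalised observable:

    ```python
    import math

    def m_T(pt_ll, m_ll, pt_miss):
        """Transverse mass of the dilepton + missing-momentum system.
        pt_ll, pt_miss: transverse momentum 2-vectors (px, py) in GeV;
        m_ll: dilepton invariant mass. The invisible system is taken massless,
        so its transverse energy equals |pt_miss|."""
        et_ll = math.hypot(m_ll, math.hypot(*pt_ll))
        et_miss = math.hypot(*pt_miss)
        sum_x = pt_ll[0] + pt_miss[0]
        sum_y = pt_ll[1] + pt_miss[1]
        mt_sq = (et_ll + et_miss) ** 2 - (sum_x**2 + sum_y**2)
        return math.sqrt(max(mt_sq, 0.0))
    ```

    For back-to-back transverse momenta of 50 GeV each with a massless dilepton pair, this gives $m_T = 100$ GeV; when the two vectors are aligned it collapses to zero, illustrating why the variable is sensitive to the event topology.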

    A Search for the Higgs Boson Produced in Association With Top Quarks in Multilepton Final States at ATLAS

    This thesis presents preliminary results of a search for Higgs boson production in association with top quarks in multilepton final states. The search was conducted in the 2012 dataset of proton-proton collisions delivered by the CERN Large Hadron Collider at a center-of-mass energy of 8 TeV and collected by the ATLAS experiment. The dataset corresponds to an integrated luminosity of 20.3 inverse femtobarns. The analysis is conducted by measuring event counts in signal regions distinguished by the number of leptons (2 same-sign, 3, and 4), jets and b-tagged jets present in the reconstructed events. The observed events in the signal regions constitute an excess over the expected number of background events. The results are evaluated using a frequentist statistical model. The observed exclusion upper limit at the 95% confidence level is 5.50 times the predicted Standard Model production cross section for Higgs production in association with top quarks. The fitted value of the ratio of the observed production rate to the expected Standard Model production rate is $2.83^{+1.58}_{-1.35}$.